Performance testing

Delays

There are several options available that are useful in performance testing.

HTTP fixed delays

Configurable via the UI. Just change the value of "Response delay" on the HTTP mapping form.
(Screenshot: Edit mocked response delay)
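
For reference, a minimal sketch of what the fixed delay could look like in the mapping file itself, assuming the WireMock-style "fixedDelayMilliseconds" response field (the 500ms value is illustrative):
...
    "response": {
        "status": 200,
        "fixedDelayMilliseconds": 500
    }
...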

HTTP random delays

Configure your mapping file to contain:
...
    "response": {
        "status": 200,
        "delayDistribution": {
            "type": "lognormal",
            "median": 70,
            "sigma": 0.3
        }
...
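
In the lognormal example above, the median is the median delay in milliseconds and sigma controls the spread of the distribution. A uniform distribution can be sketched in a similar way, assuming the same WireMock-style delayDistribution block (the bounds below are illustrative, in milliseconds):
...
    "response": {
        "status": 200,
        "delayDistribution": {
            "type": "uniform",
            "lower": 50,
            "upper": 90
        }
...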

JMS delays

JMS delays are available, see the documentation for more details.

Native IBM® MQ delays

Native IBM MQ delays are available, see the documentation for more details.

Performance profile

The performance profile configures Traffic Parrot for faster than usual responses.

To enable the performance profile:
  1. use trafficparrot.performance.properties (see the sketch below)
  2. configure only the extensions and helpers that are used in the mappings (by default, all of them are disabled)
  3. disable INFO logs (set log4j to WARN or ERROR only)
  4. configure the operating system
  5. Optional: run a cluster of Traffic Parrot instances behind a load balancer
  6. Optional: if you have any issues, contact us
Please note that in performance mode all mappings are loaded into memory from disk on startup, so do not make changes to mapping files directly on disk while running in performance mode.
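
A minimal sketch of step 1, assuming the properties file is selected via the -Dtrafficparrot.properties.filename JVM argument that also appears in the jvm.args section later on this page:
-Dtrafficparrot.properties.filename=trafficparrot.performance.properties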

Native IBM MQ performance monitoring

For Native IBM MQ you can enable additional logging that displays the total processing time of messages.

Native IBM® MQ tuning

Here is a list of configuration parameters for tuning Traffic Parrot Native IBM® MQ.

Traffic Parrot properties:
  • trafficparrot.virtualservice.mapping.cache.milliseconds - how long to cache the mapping files in memory (the default of 0 means no caching)
  • trafficparrot.virtualservice.mapping.cache.populate.on.startup - whether the mappings cache should be pre-loaded on startup
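
A sketch of how these two properties might be set in trafficparrot.properties (the one-hour cache duration is illustrative; the populate-on-startup flag matches the HTTP benchmark configuration later on this page):
# illustrative values: cache mapping files for one hour and pre-load the cache on startup
trafficparrot.virtualservice.mapping.cache.milliseconds=3600000
trafficparrot.virtualservice.mapping.cache.populate.on.startup=true
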
Native IBM® MQ connection attributes:
  • readConnectionsToOpen - how many read connections to open to the queue manager via the specified channel
  • writeConnectionsToOpen - how many write connections to open to the queue manager via the specified channel
For example:
{
    "connectionId": "1",
    "connectionName": "Local Docker MQ 9",
    "connectionData": {
      "ibmMqVersion": "IBM_MQ_9",
      "hostname": "localhost",
      "port": 1414,
      "queueManager": "QM1",
      "channel": "DEV.APP.SVRCONN",
      "username": "app",
      "password": "",
      "useMQCSPAuthenticationMode": false,
      "readConnectionsToOpen": 5,
      "writeConnectionsToOpen": 5,
      "sslCipherSuite": null,
      "sslPeerName": null
    }
},
Native IBM® MQ attributes:
  • receiveThreads - how many threads to use for the given mapping to receive messages
  • sendThreads - how many threads to use for the given mapping to send messages
For example:
{
  "mappingId" : "3bc18f0b-9d95-4af1-a2f8-848210b2d8a1",
  "request" : {
    "destination" : {
      "name" : "DEV.QUEUE.1",
      "type" : "QUEUE"
    },
    "bodyMatcher" : {
      "anything" : "anything"
    }
  },
  "response" : {
    "destination" : {
      "name" : "DEV.QUEUE.2",
      "type" : "QUEUE"
    },
    "ibmMqResponseTransformerClassName" : "NO_TRANSFORMER",
    "format" : "MQFMT_STRING",
    "text" : "",
    "fixedDelayMilliseconds" : 0
  },
  "receiveThreads" : 5,
  "sendThreads" : 5
}

Low performance?

If you are observing lower than expected performance, please contact us.

Please keep in mind that Traffic Parrot performance depends on:
  • speed of the hardware or VM, the slower the hardware or VM the slower Traffic Parrot
  • Java version, the older the version the lower the performance
  • number of mappings, the higher the number of mappings the lower the performance
  • complexity of dynamic responses, the higher the complexity the lower the performance
  • complexity of the request matching, an "any" matcher will be much faster than a complex one like a regexp or JSONPath matcher (see the sketch after this list)
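
As a sketch of that difference, assuming WireMock-style HTTP request matchers (the URL and JSONPath expression below are illustrative), a mapping that matches any POST to a URL:
...
    "request": {
        "method": "POST",
        "urlPath": "/orders"
    }
...
is much cheaper to evaluate than one that inspects the body with a JSONPath expression:
...
    "request": {
        "method": "POST",
        "urlPath": "/orders",
        "bodyPatterns": [
            { "matchesJsonPath": "$.order[?(@.type == 'PRIORITY')]" }
        ]
    }
...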

Example performance benchmarks Native IBM® MQ

Summary

This benchmark demonstrates how hardware resources, network parameters and the complexity of virtual services (mocks/stubs) impact Traffic Parrot version 5.12.0 performance for a few sample scenarios. In one example, we show how throughput improves from 6,000 TPS to 20,000 TPS by increasing the hardware resources and network capacity. In another example, we show that the complexity of the virtual services (mocks/stubs) makes the difference between 1,000 TPS and 6,000 TPS when running on the same hardware and network. Download the benchmark PDF here or view the results in a web browser below.

Benchmark results

Each test setup was run against the three environments described below. Processing latency covers reading the request message, constructing the response message and writing the response message.

20 XML mappings, 100ms fixed delay, dynamic (2 XPaths), message size 490B, 1 send thread per queue, 1 receive thread per queue, 5 read connections per QM, 5 write connections per QM, non-transactional, non-persistent; 20 queues, 4 queue managers:
  • 4 vCPUs, HDD, 6GB heap, 10 Gb/s network: 6,022 t/s over 10,000,000 transactions; 99% under 50.00ms, 95% under 20.00ms
  • 16 vCPUs, SSD, 12GB heap, 10 Gb/s network: 14,984 t/s over 10,000,000 transactions; 99% under 40.00ms, 95% under 30.00ms
  • 16 vCPUs, SSD, 12GB heap, 30 Gb/s network: 21,541 t/s over 10,000,000 transactions; 99% under 30.00ms, 95% under 20.00ms

20 XML mappings, no delay, dynamic (2 XPaths), message size 490B, 1 send thread per queue, 1 receive thread per queue, 5 read connections per QM, 5 write connections per QM, non-transactional, non-persistent; 20 queues, 4 queue managers:
  • 4 vCPUs, HDD, 6GB heap, 10 Gb/s network: 5,751 t/s over 10,000,000 transactions; 99% under 30.00ms, 95% under 20.00ms
  • 16 vCPUs, SSD, 12GB heap, 10 Gb/s network: 13,425 t/s over 10,000,000 transactions; 99% under 50.00ms, 95% under 30.00ms
  • 16 vCPUs, SSD, 12GB heap, 30 Gb/s network: 19,321 t/s over 10,000,000 transactions; 99% under 30.00ms, 95% under 20.00ms

15 XML mappings, fixed delays 100ms to 200ms, dynamic (1 to 29 XPaths per message), message size 500B to 57kB, 1-4 send threads depending on the queue, 1-4 receive threads depending on the queue, 18 read connections per QM, 18 write connections per QM, non-transactional, non-persistent; 15 queues, 2 queue managers:
  • 4 vCPUs, HDD, 6GB heap, 10 Gb/s network: 1,276 t/s over 3,080,000 transactions; 99% under 10.00ms, 95% under 10.00ms
  • 16 vCPUs, SSD, 12GB heap, 10 Gb/s network: 4,180 t/s over 3,080,000 transactions; 99% under 10.00ms, 95% under 10.00ms
  • 16 vCPUs, SSD, 12GB heap, 30 Gb/s network: 4,472 t/s over 3,080,000 transactions; 99% under 10.00ms, 95% under 10.00ms

Environment configuration 4 vCPUs - 6GB heap - 10 Gb/s network

Testing Traffic Parrot version 5.12.0-RC1

IBM MQ 9.1.1.0

TP running on
  • GCP VM type n2-standard-4 (4 vCPUs, 16 GB memory)
  • Standard persistent disk
  • “europe-west2-c” GCP data center
TP configuration
  • Logging level = WARN
  • -Xmx6144m
Network
  • Using external IPs (10 Gbits/sec tested with iperf)
Four IBM MQ queue managers each of them running on
  • GCP VM type n1-standard-1 (1 vCPU, 3.75 GB memory)
  • Standard persistent disk
  • “europe-west2-c” GCP data center

Environment configuration 16 vCPUs - 12GB heap - 10 Gb/s network

Testing Traffic Parrot version 5.12.0-RC1

IBM MQ 9.1.1.0

TP running on
  • GCP VM type c2-standard-16 (16 vCPUs, 64 GB memory)
  • SSD persistent disk
  • “us-central1-a” GCP data center
TP configuration:
  •  Logging level = WARN
  • -Xmx12g
Network
  • Using external IPs (10 Gbits/sec tested with iperf)
Four IBM MQ queue managers each of them running on
  • GCP VM type n2-highcpu-4 (4 vCPUs, 4 GB memory)
  • SSD persistent disk
  • “us-central1-a” GCP data center

Environment configuration 16 vCPUs - 12GB heap - 30 Gb/s network

Same as “16 vCPUs - 12GB heap - 10 Gb/s network” above but:

Network
  • Using internal IPs (30 Gb/s tested with iperf)

Example performance benchmarks for HTTP

HTTP benchmark configuration

Tested on Google Cloud Platform C4 VMs (INTEL(R) XEON(R) PLATINUM 8581C CPU @ 2.30GHz) running with STANDARD network, hyperdisk-balanced 10GB.

Traffic Parrot properties:

Settings common to all runs (2, 4, 8 and 16 CPUs):
  • Test request headers ask for gzipped content (Accept-Encoding: gzip): yes
  • Log level: ERROR
  • Other properties:
    trafficparrot.virtualservice.maxRequestJournalEntries=0
    trafficparrot.virtualservice.mapping.cache.milliseconds=48000000
    trafficparrot.virtualservice.mapping.cache.populate.on.startup=true
    trafficparrot.virtualservice.handlebars.select.indexAndCacheCsvFiles=true
    trafficparrot.virtualservice.accessLogsEnabled=false
    trafficparrot.virtualservice.handlebars.maxCacheEntries=0
    trafficparrot.virtualservice.httpRequestLoggingDisabled=true
Settings that vary with the number of CPUs:
  • trafficparrot.virtualservice.jettyAcceptors: 5 (2 CPUs), 5 (4 CPUs), 8 (8 CPUs), 16 (16 CPUs)
  • trafficparrot.virtualservice.containerThreads: 500 (2 CPUs), 500 (4 CPUs), 1000 (8 CPUs), 1000 (16 CPUs)

Traffic Parrot jvm.args:

-Xmx:(as per the table spec)
-Xms:1g
-XX:+UseG1GC
-XX:+PrintCommandLineFlags
-XX:+HeapDumpOnOutOfMemoryError
-Djdk.security.allowNonCaAnchor
-Dtrafficparrot.properties.filename=trafficparrot.properties
-Dtrafficparrotserver.logging.properties.filename=trafficparrotserver.log4j2.xml
-Djdk.serialFilter=*
-Djava.rmi.server.useCodebaseOnly=false
-Dcom.sun.management.jmxremote
-Djava.rmi.server.hostname=x.y.z.z
-Dcom.sun.management.jmxremote.port=8084
-Dcom.sun.management.jmxremote.rmi.port=8099
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-XX:+UseLargePages

Traffic Parrot version 5.53.1 HTTP benchmark results

These benchmarks demonstrate that if you double the number of CPUs, you can roughly double the performance you get out of Traffic Parrot, for the type of mappings tested in this benchmark.

They also demonstrate that you can use "tricks" to improve the performance of your mocks by 5-15x. The example in the benchmark below shows that if you have a large static response, 500kB in this case, you can gzip the response content inside the mapping file. That way the response body is not gzipped every time a response is sent, which can improve performance 5x or even 15x on low-CPU machines.
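
A sketch of what such a pre-gzipped mapping could look like, assuming the WireMock-style "base64Body" response field is used to hold the gzip-compressed body (the URL, headers and truncated body below are illustrative):
{
    "request": {
        "method": "GET",
        "urlPath": "/large-static-response"
    },
    "response": {
        "status": 200,
        "headers": {
            "Content-Type": "application/json",
            "Content-Encoding": "gzip"
        },
        "base64Body": "H4sIAAAAAAAA..."
    }
}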

If you would like to learn more performance "tricks" applicable to your performance testing use case, please contact us.

1 mapping with a static response 300B, 200 parallel requests/threads:
  • 2 CPUs, 2GB heap: 11815.2 req/s; min 0ms, max 1658ms, avg 16.20ms, 95th percentile 45ms, 99th percentile 108ms; 1000000 successes, 0 failures
  • 4 CPUs, 4GB heap: 35621.4 req/s; min 0ms, max 1431ms, avg 4.79ms, 95th percentile 14ms, 99th percentile 32ms; 1000000 successes, 0 failures
  • 8 CPUs, 4GB heap: 60193.8 req/s; min 0ms, max 1126ms, avg 2.09ms, 95th percentile 6ms, 99th percentile 15ms; 1000000 successes, 0 failures
  • 16 CPUs, 8GB heap: 70452.3 req/s; min 0ms, max 1124ms, avg 1.57ms, 95th percentile 4ms, 99th percentile 7ms; 1000000 successes, 0 failures

1 mapping with a static response 500kB, 200 parallel requests/threads:
  • 2 CPUs, 2GB heap: 171.0 req/s; min 0ms, max 6634ms, avg 719.19ms, 95th percentile 2747ms, 99th percentile 4138ms; 99995 successes, 5 failures
  • 4 CPUs, 4GB heap: 352.8 req/s; min 1ms, max 4949ms, avg 385.43ms, 95th percentile 1667ms, 99th percentile 2678ms; 99999 successes, 1 failure
  • 8 CPUs, 4GB heap: 663.2 req/s; min 1ms, max 2407ms, avg 193.52ms, 95th percentile 771ms, 99th percentile 1347ms; 100000 successes, 0 failures
  • 16 CPUs, 8GB heap: 1333.8 req/s; min 1ms, max 2214ms, avg 77.97ms, 95th percentile 322ms, 99th percentile 724ms; 100000 successes, 0 failures

1 mapping with a static response 500kB, already pre-gzipped in the mapping file, 200 parallel requests/threads:
  • 2 CPUs, 2GB heap: 3630.0 req/s; min 0ms, max 1772ms, avg 51.56ms, 95th percentile 198ms, 99th percentile 594ms; 100000 successes, 0 failures
  • 4 CPUs, 4GB heap: 7647.0 req/s; min 0ms, max 1283ms, avg 24.40ms, 95th percentile 73ms, 99th percentile 183ms; 100000 successes, 0 failures
  • 8 CPUs, 4GB heap: 13024.2 req/s; min 0ms, max 1202ms, avg 10.54ms, 95th percentile 31ms, 99th percentile 52ms; 100000 successes, 0 failures
  • 16 CPUs, 8GB heap: 17067.8 req/s; min 0ms, max 1147ms, avg 3.36ms, 95th percentile 11ms, 99th percentile 27ms; 100000 successes, 0 failures